191 research outputs found
Portfolio management using partially observable Markov decision process
Portfolio theory is concerned with how an investor should divide their wealth among different securities. This problem was first formulated by Markowitz in 1952. Since then, other more sophisticated formulations have been introduced. However, practical issues such as transaction costs and their effects on portfolio choice over multiple stages have not been widely considered. In our work, we show that the portfolio management problem is appropriately formulated as a Partially Observable Markov Decision Process. We use a Monte Carlo method called "rollout" to approximate an optimal decision-making strategy. To capture the behavior of stock prices over time, we use two well-known models.
2nd place, IS&T Graduate Group
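The core of the rollout idea mentioned above can be sketched in a few lines: evaluate each candidate action by Monte Carlo simulation of a simple base policy and pick the action with the best average outcome. The price model, return parameters, and buy-and-hold base policy below are illustrative assumptions, not the models used in the work.

```python
import random

def simulate_return(risky_fraction, horizon=10, rng=random):
    """Simulate terminal wealth for one trajectory under a buy-and-hold
    base policy. The risky asset earns a noisy per-step return (assumed
    Gaussian here); cash earns a small fixed rate."""
    wealth = 1.0
    for _ in range(horizon):
        risky = rng.gauss(0.01, 0.05)  # assumed mean/volatility per step
        wealth *= 1 + risky_fraction * risky + (1 - risky_fraction) * 0.002
    return wealth

def rollout_choose(actions, n_sims=2000, seed=0):
    """Rollout step: score each candidate allocation by its simulated
    average terminal wealth and return the best-scoring one."""
    rng = random.Random(seed)
    best_action, best_value = None, float("-inf")
    for a in actions:
        value = sum(simulate_return(a, rng=rng) for _ in range(n_sims)) / n_sims
        if value > best_value:
            best_action, best_value = a, value
    return best_action

choice = rollout_choose([0.0, 0.25, 0.5, 0.75, 1.0])
```

In a full POMDP formulation the simulation would start from the current belief state and re-run at every decision stage; this sketch only shows the one-step lookahead-by-simulation structure that gives rollout its name.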
Data-graph repairs: the preferred approach
Repairing inconsistent knowledge bases is a task that has been assessed, with
great advances over several decades, from within the knowledge representation
and reasoning and the database theory communities. As information becomes more
complex and interconnected, new types of repositories, representation languages
and semantics are developed in order to be able to query and reason about it.
Graph databases provide an effective way to represent relationships among data,
and allow processing and querying these connections efficiently. In this work,
we focus on the problem of computing preferred (subset and superset) repairs
for graph databases with data values, using a notion of consistency based on a
set of Reg-GXPath expressions as integrity constraints. Specifically, we study
the problem of computing preferred repairs based on two different preference
criteria, one based on weights and the other based on multisets, showing that
in most cases it is possible to retain the same computational complexity as in
the case where no preference criterion is available for exploitation.
Comment: arXiv admin note: text overlap with arXiv:2206.0750
On the complexity of finding set repairs for data-graphs
In the deeply interconnected world we live in, pieces of information link
domains all around us. As graph databases embrace effectively relationships
among data and allow processing and querying these connections efficiently,
they are rapidly becoming a popular platform for storage that supports a wide
range of domains and applications. As in the relational case, it is expected
that data preserves a set of integrity constraints that define the semantic
structure of the world it represents. When a database does not satisfy its
integrity constraints, a possible approach is to search for a 'similar'
database that does satisfy the constraints, also known as a repair. In this
work, we study the problem of computing subset and superset repairs for graph
databases with data values using a notion of consistency based on a set of
Reg-GXPath expressions as integrity constraints. We show that for positive
fragments of Reg-GXPath these problems admit a polynomial-time algorithm, while
the full expressive power of the language renders them intractable.
Comment: 35 pages, including Appendix
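The notion of a subset repair described above can be illustrated with a toy example: drop as few edges as possible so that the data-graph satisfies its constraints. The functional constraint and brute-force enumeration below are illustrative stand-ins, not the paper's Reg-GXPath formalism or its polynomial-time algorithms.

```python
from itertools import combinations

def consistent(edges):
    """Toy integrity constraint: each node has at most one incoming
    'capital_of' edge (a country has at most one capital)."""
    targets = [t for (s, label, t) in edges if label == "capital_of"]
    return len(targets) == len(set(targets))

def subset_repairs(edges):
    """Enumerate subset repairs: consistent subsets of maximum size,
    i.e. the graphs obtained by deleting a minimal set of edges."""
    edges = list(edges)
    for k in range(len(edges), -1, -1):
        found = [set(c) for c in combinations(edges, k) if consistent(c)]
        if found:
            return found  # all repairs at this k share the maximal size
    return [set()]

g = {("paris", "capital_of", "france"),
     ("lyon", "capital_of", "france"),
     ("madrid", "capital_of", "spain")}
repairs = subset_repairs(g)  # two repairs: drop one conflicting edge
```

Each repair keeps the unproblematic `madrid` edge and discards exactly one of the two edges claiming a capital for `france`, which is the "closest consistent database" intuition behind set repairs.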
An epistemic approach to model uncertainty in data-graphs
Graph databases are becoming widely successful as data models that make it
possible to effectively represent and process complex relationships among various types of
data. As with any other type of data repository, graph databases may suffer
from errors and discrepancies with respect to the real-world data they intend
to represent. In this work we explore the notion of probabilistic unclean graph
databases, previously proposed for relational databases, in order to capture
the idea that the observed (unclean) graph database is actually the noisy
version of a clean one that correctly models the world but that we only
partially know. As the factors that may be involved in the observation can be
many, e.g., different types of clerical errors or unintended transformations of
the data, we assume a probabilistic model that describes the distribution over
all possible ways in which the clean (uncertain) database could have been
polluted. Based on this model, we define two computational problems, data
cleaning and probabilistic query answering, and study the complexity of each
when the transformation of the database can be caused by either removing
(subset) or adding (superset) nodes and edges.
Comment: 25 pages, 3 figures
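The "distribution over all possible ways the clean database could have been polluted" can be sketched with a deliberately simple noise model: each clean edge is independently dropped (subset noise) and each absent candidate edge is independently inserted (superset noise). The independence assumption, probabilities, and Monte Carlo estimator below are illustrative choices, not the probabilistic model studied in the work.

```python
import random

def pollute(clean_edges, all_edges, p_del=0.1, p_ins=0.05, rng=random):
    """Sample one observed (unclean) graph from a clean one.
    all_edges is the finite universe of candidate edges."""
    observed = set()
    for e in all_edges:
        if e in clean_edges:
            if rng.random() > p_del:   # subset noise: edge deletions
                observed.add(e)
        elif rng.random() < p_ins:     # superset noise: spurious insertions
            observed.add(e)
    return observed

def prob_query_holds(clean_edges, all_edges, query_edge, n=10000, seed=1):
    """Monte Carlo estimate of the probability that a simple
    edge-existence query holds in the observed graph."""
    rng = random.Random(seed)
    hits = sum(query_edge in pollute(clean_edges, all_edges, rng=rng)
               for _ in range(n))
    return hits / n

clean = {("a", "b"), ("b", "c")}
universe = {("a", "b"), ("b", "c"), ("a", "c")}
p = prob_query_holds(clean, universe, ("a", "b"))  # close to 1 - p_del
```

Probabilistic query answering in the paper runs in the opposite direction, reasoning about the clean database given the observed one, but the forward noise process above is the ingredient that makes such posterior reasoning well-defined.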
Characteristics of stable chronic obstructive pulmonary disease patients in the pulmonology clinics of seven Asian cities
Background and objectives: Chronic obstructive pulmonary disease (COPD) is responsible for significant morbidity and mortality worldwide. We evaluated the characteristics of stable COPD patients in the pulmonology clinics of seven Asian cities and also evaluated whether exposure to biomass fuels and dusty jobs was related to respiratory symptoms, airflow limitation, and quality of life in these patients. Methods: This cross-sectional observational study recruited 922 COPD patients from seven Asian cities. The patients underwent spirometry and were administered questionnaires about their exposure to cigarette smoking, biomass fuels, and dusty jobs, in addition to respiratory symptoms and health-related quality of life. Results: The history of exposure to biomass fuels and dusty jobs, as well as the respiratory symptoms of cough, phlegm, wheeze, and dyspnea, varied from city to city. These symptoms were more frequent in COPD patients with a history of exposure to biomass fuels than in those without, and in those with a history of exposure to dusty jobs than in those without (P < 0.01 for all comparisons). Airflow limitation was more severe in COPD patients with a history of exposure to biomass fuels than in those without (52.2% versus 55.9% predicted post-bronchodilator forced expiratory volume in 1 second [FEV1], P = 0.009), and quality of life was poorer in those with exposure to biomass fuels than in those without (40.4 versus 36.2 on the St George's Respiratory Questionnaire [SGRQ] total score, P = 0.001). Airflow limitation was more severe in COPD patients with a history of exposure to dusty jobs than in those without (51.2% versus 57.3% predicted post-bronchodilator FEV1, P < 0.001), and quality of life was poorer in those with dusty jobs than in those without (41.0 versus 34.6 on the SGRQ total score, P = 0.006).
Conclusion: The characteristics of COPD patients vary across Asian cities, and a history of exposure to biomass fuels or dusty jobs was related to more frequent symptoms, more severe airflow limitation, and poorer quality of life.
The Super-Earth Opportunity - Search for Habitable Exoplanets in the 2020s
The recent discovery of a staggering diversity of planets beyond the Solar
System has brought with it a greatly expanded search space for habitable
worlds. The Kepler exoplanet survey has revealed that most planets in our
interstellar neighborhood are larger than Earth and smaller than Neptune.
Collectively termed super-Earths and mini-Neptunes, some of these planets may
have the conditions to support liquid water oceans, and thus Earth-like
biology, despite differing in many ways from our own planet. In addition to
their quantitative abundance, super-Earths are relatively large and are thus
more easily detected than true Earth twins. As a result, super-Earths represent
a uniquely powerful opportunity to discover and explore a panoply of
fascinating and potentially habitable planets in 2020-2030 and beyond.
Comment: Science white paper submitted to the 2020 Astronomy and Astrophysics Decadal Survey
Adjunctive benzodiazepine treatment of hospitalized schizophrenia patients in Asia from 2001 to 2008
The International Journal of Neuropsychopharmacology, 14(6), 735-74. DOI: 10.1017/S146114571000163X